Left unchecked, generative AI spells disaster
Governments and cybersecurity industry can together regulate generative AI without stifling innovation
The generative AI industry will be worth about AUD 22 trillion by 2030, according to CSIRO. These systems – of which ChatGPT is currently the best known – can write essays and code, generate music and artwork, and hold entire conversations. The more pertinent question is what happens when they're turned to illegal uses.
Last week, the streaming community was rocked by a headline linked to the misuse of generative AI. Popular Twitch streamer Atrioc issued a teary-eyed apology video after being caught viewing pornography with the superimposed faces of other women streamers. The "deepfake" technology needed to paste a celebrity's head onto a porn actor's body has been around for a while, but recent advances have made the results much harder to detect. And that's just the tip of the iceberg.
In the wrong hands, generative AI could do untold damage. There's a lot we stand to lose, should laws and regulations fail to keep pace. A month back, Lensa came under fire for allowing its system to create fully nude and hyper-sexualised images from users' headshots. Controversially, it also whitened the skin of women of colour and made their features more European. The backlash was swift. But what's relatively overlooked is the vast potential to use artistic generative AI in scams.
At the far end of the spectrum, there are reports of these tools being able to fake fingerprints and facial scans (the methods most of us use to lock our phones). Criminals are quickly finding new ways to use generative AI to improve the frauds they already perpetrate. The lure of generative AI in scams comes from its ability to find patterns in large amounts of data.
Cybersecurity has seen a rise in 'bad bots': malicious automated programs that mimic human behaviour to commit crime. Generative AI will make these even more sophisticated and difficult to detect. Ever received a scam text from the 'tax office' claiming you have a refund waiting? Or maybe you got a call claiming a warrant was out for your arrest. In such scams, generative AI is used to improve the quality of the texts or emails, making them much more believable. In recent years, for example, we've seen AI systems used to impersonate important figures in 'voice spoofing' attacks.
And then there are romance scams, where criminals pose as romantic interests and ask their targets for money to help them out of financial distress. These scams are already widespread and often lucrative. Training AI on actual messages between intimate partners could help create a scam chatbot that's indistinguishable from a human.
Generative AI can also allow cybercriminals to target vulnerable people more selectively. For instance, training a system on information stolen from major companies, as in the Optus or Medibank hacks last year, could help criminals target elderly people, people with disabilities or those in a financial crisis. Further, these systems can be used to improve computer code, which some cybersecurity experts say will make malware and viruses easier to create and harder for antivirus software to detect. The technology is here, and we aren't prepared.
The governments of Australia and New Zealand have published frameworks relating to AI, but these aren't binding rules. Existing laws on privacy, transparency and freedom from discrimination aren't up to the task of dealing with AI's impact. This puts us behind the rest of the world. The US has had a legislated National Artificial Intelligence Initiative in place since 2021. And since 2019, it has been illegal in California for a bot to interact with users for commerce or electoral purposes without disclosing it's not human. Meanwhile, the European Union is well on its way to enacting the world's first AI law.
The AI Act bans certain types of AI programs posing 'unacceptable risk' – such as those used by China's social credit system – and imposes mandatory restrictions on high-risk systems. Although asking ChatGPT to break the law results in warnings that "planning or carrying out a serious crime can lead to severe legal consequences", the fact is there is no requirement for these systems to have a 'moral code' programmed into them. There may be no limit to what they can be asked to do, and criminals will likely figure out workarounds for any rules intended to prevent their illegal use.
Governments need to work closely with the cybersecurity industry to regulate generative AI without stifling innovation, such as by requiring ethical considerations for AI programs. The Australian government should use the upcoming Privacy Act review to get ahead of the threats generative AI poses to our online identities. On that count, New Zealand's Privacy, Human Rights and Ethics Framework is a positive step.
We also need to be more cautious as a society about believing what we see online, and remember that humans are notoriously bad at detecting fraud. Can you spot a scam? As criminals add generative AI tools to their arsenal, spotting scams will only get trickier. The classic tips will still apply, but beyond those, we'll learn a lot from assessing the ways in which these tools fall short.
Generative AI is bad at critical reasoning and conveying emotion. It can even be tricked into giving wrong answers. Knowing when and why this happens could help us develop effective methods to catch cybercriminals using AI for extortion. Tools are also being developed to detect the output of systems such as ChatGPT, and these could go a long way towards preventing AI-based cybercrime if they prove effective.